Monaural speech enhancement based on gated dilated convolutional recurrent network
Xinyuan YOU, Heng WANG
Journal of Computer Applications    2024, 44 (4): 1317-1324.   DOI: 10.11772/j.issn.1001-9081.2023040452

The use of contextual information plays an important role in speech enhancement tasks. To address the under-utilization of global speech information, a Gated Dilated Convolutional Recurrent Network (GDCRN) for complex spectral mapping was proposed. GDCRN was composed of an encoder, a Gated Temporal Convolution Module (GTCM) and a decoder, where the encoder and decoder had an asymmetric structure. Firstly, features were processed by the encoder using a Gated Dilated Convolution Module (GDCM), which expanded the receptive field. Secondly, longer contextual information was captured and selectively passed through the use of the GTCM. Finally, deconvolution combined with a Gated Linear Unit (GLU) was used by the decoder, which was connected to the corresponding convolution layers in the encoder using skip connections. Additionally, a Channel Time-Frequency Attention (CTFA) mechanism was introduced. Experimental results show that the proposed network has fewer parameters and shorter training time than other networks such as Temporal Convolutional Neural Network (TCNN) and Gated Convolutional Recurrent Network (GCRN). The proposed GDCRN improves PESQ (Perceptual Evaluation of Speech Quality) by 0.258 9 and STOI (Short-Time Objective Intelligibility) by 4.67 percentage points, demonstrating that the proposed network has better enhancement effect and stronger generalization ability.
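As an illustration of the gating idea above, a minimal numpy sketch of a Gated Linear Unit (GLU); the weights and shapes are hypothetical, not the paper's network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_linear_unit(x, w_a, w_b):
    """GLU: split computation into a content path and a gate path; the
    sigmoid gate controls how much of each feature passes through."""
    a = x @ w_a          # content branch
    b = x @ w_b          # gate branch
    return a * sigmoid(b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # 4 frames, 8 features (toy sizes)
w_a = rng.standard_normal((8, 8))
w_b = rng.standard_normal((8, 8))
y = gated_linear_unit(x, w_a, w_b)
print(y.shape)  # (4, 8)
```

Because the gate lies in (0, 1), every output magnitude is bounded by the content branch, which is what lets the network suppress uninformative features.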

Segmentation network for day and night ground-based cloud images based on improved Res-UNet
Boyue WANG, Yingxiang LI, Jiandan ZHONG
Journal of Computer Applications    2024, 44 (4): 1310-1316.   DOI: 10.11772/j.issn.1001-9081.2023040453

Aiming at the problems of detail information loss and low segmentation accuracy in the segmentation of day and night ground-based cloud images, a segmentation network called CloudResNet-UNetwork (CloudRes-UNet) for day and night ground-based cloud images based on improved Res-UNet (Residual network-UNetwork) was proposed, in which the overall encoder-decoder network structure was adopted. Firstly, ResNet50 was used by the encoder to extract features, enhancing the feature extraction ability. Secondly, a Multi-Stage feature extraction (Multi-Stage) module was designed, which combined group convolution, dilated convolution and channel shuffle to obtain high-intensity semantic information. Thirdly, an Efficient Channel Attention Network (ECA-Net) module was added to focus on the important information in the channel dimension, strengthen the attention to the cloud regions in the ground-based cloud images, and improve the segmentation accuracy. Finally, bilinear interpolation was used by the decoder to upsample the features, which improved the clarity of the segmented images and reduced the loss of object and position information. The experimental results show that, compared with the state-of-the-art deep-learning-based ground-based cloud image segmentation network Cloud-UNetwork (Cloud-UNet), the segmentation accuracy of CloudRes-UNet on the day and night ground-based cloud image segmentation dataset is increased by 1.5 percentage points, and the Mean Intersection over Union (MIoU) is increased by 1.4 percentage points, which indicates that CloudRes-UNet extracts cloud information more accurately. This has positive significance for weather forecasting, climate research, photovoltaic power generation and so on.
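The channel-attention step described above can be sketched roughly as follows; this is a simplified illustration (real ECA-Net applies a 1D convolution over the pooled channel descriptors, omitted here):

```python
import numpy as np

def channel_attention(feat):
    """ECA-style channel attention sketch: global-average-pool each channel,
    squash the descriptor to a (0,1) weight, and rescale that channel.
    feat has shape (C, H, W)."""
    pooled = feat.mean(axis=(1, 2))              # (C,) per-channel descriptor
    weights = 1.0 / (1.0 + np.exp(-pooled))      # sigmoid gate per channel
    return feat * weights[:, None, None]

feat = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
out = channel_attention(feat)
print(out.shape)  # (2, 3, 3)
```

Channels with stronger average responses receive larger weights, which is the "focus on important channels" behavior the abstract describes.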

Two-channel progressive feature filtering network for tampered image detection and localization
Shunwang FU, Qian CHEN, Zhi LI, Guomei WANG, Yu LU
Journal of Computer Applications    2024, 44 (4): 1303-1309.   DOI: 10.11772/j.issn.1001-9081.2023040493

The existing image tamper detection networks based on deep learning often suffer from low detection accuracy and weak transferability. To address these issues, a two-channel progressive feature filtering network was proposed. Two channels were used to extract two-domain features of the image in parallel: one extracted the shallow and deep features of the image spatial domain, and the other extracted the feature distribution of the image noise domain. At the same time, a progressive subtle feature screening mechanism was used to filter redundant features and gradually locate the tampered regions. To extract the tamper mask more accurately, a two-channel subtle feature extraction module was proposed, which combined the subtle features of the spatial domain and the noise domain to generate a more accurate tamper mask. During decoding, the localization ability of the network for tampered regions was improved by fusing filtered features of different scales with the contextual information of the network. The experimental results show that, in terms of detection and localization, compared with the existing advanced tamper detection networks ObjectFormer, Multi-View multi-Scale Supervision Network (MVSS-Net) and Progressive Spatio-Channel Correlation Network (PSCC-Net), the F1 score of the proposed network is increased by 10.4, 5.9 and 12.9 percentage points respectively on the CASIA V2.0 dataset; when faced with Gaussian low-pass filtering, Gaussian noise, and JPEG compression attacks, compared with Manipulation Tracing Network (ManTra-Net) and Spatial Pyramid Attention Network (SPAN), the Area Under Curve (AUC) of the proposed network is increased by at least 10.0 and 5.4 percentage points respectively. It is verified that the proposed network can effectively solve the problems of low detection accuracy and poor transferability in tamper detection.

3D-GA-Unet: MRI image segmentation algorithm for glioma based on 3D-Ghost CNN
Lijun XU, Hui LI, Zuyang LIU, Kansong CHEN, Weixuan MA
Journal of Computer Applications    2024, 44 (4): 1294-1302.   DOI: 10.11772/j.issn.1001-9081.2023050606

Gliomas are the most common primary cranial tumors, arising from cancerous changes in the glia of the brain and spinal cord, with a high proportion of malignant cases and a significant mortality rate. Quantitative segmentation and grading of gliomas based on Magnetic Resonance Imaging (MRI) images is the main method for their diagnosis and treatment. To improve the segmentation accuracy and speed for gliomas, a 3D-Ghost Convolutional Neural Network (CNN)-based MRI image segmentation algorithm for glioma, called 3D-GA-Unet, was proposed. 3D-GA-Unet was built on 3D U-Net (3D U-shaped Network). A 3D-Ghost CNN block was designed to increase the useful output and reduce the redundant features of traditional CNNs by using linear operations. A Coordinate Attention (CA) block was added, which helped to obtain more image information favorable to segmentation accuracy. The model was trained and validated on the publicly available glioma dataset BraTS2018. The experimental results show that 3D-GA-Unet achieves average Dice Similarity Coefficients (DSCs) of 0.863 2, 0.847 3 and 0.803 6 and average sensitivities of 0.867 6, 0.949 2 and 0.831 5 for the Whole Tumor (WT), Tumor Core (TC) and Enhanced Tumor (ET) in glioma segmentation results. It is verified that 3D-GA-Unet can accurately segment glioma images and further improve segmentation efficiency, which is of positive significance for the clinical diagnosis of gliomas.
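The Ghost-block idea of generating extra feature maps by cheap linear operations can be sketched as follows; this is an illustrative 2D projection with made-up weights, not the paper's 3D convolutional blocks:

```python
import numpy as np

def ghost_features(x, w_primary, cheap_scale=0.5):
    """Ghost-module sketch: a small set of 'intrinsic' features from a
    normal (expensive) projection, then extra 'ghost' features produced by
    a cheap per-channel linear operation on the intrinsic ones."""
    intrinsic = x @ w_primary                 # expensive path, few channels
    ghost = intrinsic * cheap_scale + 0.1     # cheap linear op (illustrative)
    return np.concatenate([intrinsic, ghost], axis=1)

x = np.ones((5, 4))
w = np.eye(4)[:, :2]                          # project 4 -> 2 intrinsic channels
out = ghost_features(x, w)
print(out.shape)  # (5, 4): 2 intrinsic + 2 ghost channels
```

Half the output channels cost only an elementwise operation, which is where the parameter and computation savings come from.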

Interstitial lung disease segmentation algorithm based on multi-task learning
Wei LI, Ling CHEN, Xiuyuan XU, Min ZHU, Jixiang GUO, Kai ZHOU, Hao NIU, Yuchen ZHANG, Shanye YI, Yi ZHANG, Fengming LUO
Journal of Computer Applications    2024, 44 (4): 1285-1293.   DOI: 10.11772/j.issn.1001-9081.2023040517

Interstitial Lung Disease (ILD) segmentation labels are highly costly, leading to small sample sizes in existing datasets and poor performance of trained models. To address this issue, a segmentation algorithm for ILD based on multi-task learning was proposed. Firstly, a multi-task segmentation model was constructed based on U-Net. Then, the generated lung segmentation labels were used as auxiliary task labels for multi-task learning. Finally, a method of dynamically weighting the multi-task loss functions was used to balance the losses of the primary task and the auxiliary task. Experimental results on a self-built ILD dataset show that the Dice Similarity Coefficient (DSC) of the multi-task segmentation model reaches 82.61%, which is 2.26 percentage points higher than that of U-Net. These results demonstrate that the proposed algorithm can improve the segmentation performance of ILD and assist clinical doctors in ILD diagnosis.
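One common way to dynamically weight a primary and an auxiliary loss is homoscedastic-uncertainty weighting (Kendall et al. style); the sketch below is illustrative, and the paper's exact weighting scheme may differ:

```python
import math

def weighted_multitask_loss(loss_main, loss_aux, log_var_main, log_var_aux):
    """Uncertainty-weighted sum of two task losses: each loss is scaled by
    exp(-log_variance) and the log-variance itself is added as a
    regularizer, so the weights can be learned during training."""
    return (math.exp(-log_var_main) * loss_main + log_var_main
            + math.exp(-log_var_aux) * loss_aux + log_var_aux)

# With both log-variances at 0 this reduces to a plain sum of the losses.
total = weighted_multitask_loss(0.8, 0.3, 0.0, 0.0)
print(total)  # ~1.1
```

Treating the log-variances as trainable parameters lets the model down-weight whichever task is noisier as training progresses.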

Video super-resolution reconstruction network based on frame straddling optical flow
Yang LIU, Rong LIU, Ke FANG, Xinyue ZHANG, Guangxu WANG
Journal of Computer Applications    2024, 44 (4): 1277-1284.   DOI: 10.11772/j.issn.1001-9081.2023040523

Current Video Super-Resolution (VSR) algorithms cannot fully utilize inter-frame information at different distances when processing complex scenes with large motion amplitude, making it difficult to accurately recover occlusions, boundaries, and multi-detail regions. A VSR model based on frame-straddling optical flow was proposed to solve these problems. Firstly, shallow features of the Low-Resolution (LR) frames were extracted through Residual Dense Blocks (RDBs). Then, motion estimation and compensation were performed on the video frames using a Spatial Pyramid Network (SPyNet) with straddling optical flows of different time spans, and deep feature extraction and correction were performed on the inter-frame information through multiple layers of connected RDBs. Finally, the shallow and deep features were fused, and High-Resolution (HR) frames were obtained through up-sampling. The experimental results on the REDS4 public dataset show that, compared with the deep Video Super-Resolution network using Dynamic Upsampling Filters without explicit motion compensation (DUF-VSR), the proposed model improves the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) by 1.07 dB and 0.06, respectively. These results show that the proposed model can effectively improve the quality of video image reconstruction.

Image aesthetic quality evaluation method based on self-supervised vision Transformer
Rong HUANG, Junjie SONG, Shubo ZHOU, Hao LIU
Journal of Computer Applications    2024, 44 (4): 1269-1276.   DOI: 10.11772/j.issn.1001-9081.2023040540

The existing image aesthetic quality evaluation methods widely use Convolutional Neural Networks (CNNs) to extract image features. Limited by the local receptive field mechanism, it is difficult for a CNN to extract global features from a given image, resulting in the absence of aesthetic attributes such as global composition relations and global color matching. To solve this problem, an image aesthetic quality evaluation method based on the SSViT (Self-Supervised Vision Transformer) model was proposed. The self-attention mechanism was utilized to establish long-distance dependencies among local patches of the image, adaptively learn their correlations, and extract global features so as to characterize the aesthetic attributes. Meanwhile, three tasks for perceiving aesthetic quality, namely classifying image degradation, ranking image aesthetic quality, and reconstructing image semantics, were designed to pre-train the vision Transformer in a self-supervised manner using unlabeled image data, so as to enhance the representation of global features. The experimental results on the AVA (Aesthetic Visual Assessment) dataset show that the SSViT model achieves 83.28%, 0.763 4 and 0.746 2 on evaluation accuracy, Pearson Linear Correlation Coefficient (PLCC) and SRCC (Spearman Rank-order Correlation Coefficient), respectively. These results demonstrate that the SSViT model achieves higher accuracy in image aesthetic quality evaluation.
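The global receptive field provided by self-attention can be illustrated with a minimal scaled dot-product attention over image patches (identity query/key/value projections for brevity; not the SSViT implementation):

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over patch embeddings: every patch
    attends to every other patch, giving the global receptive field that
    local convolutions lack. x has shape (num_patches, dim)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                 # pairwise patch affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over each row
    return attn @ x                               # convex mix of all patches

patches = np.random.default_rng(1).standard_normal((6, 16))  # 6 toy patches
out = self_attention(patches)
print(out.shape)  # (6, 16)
```

Each output row is a convex combination of all patch embeddings, so information from any region of the image can influence any other.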

Code clone detection based on dependency enhanced hierarchical abstract syntax tree
Zexuan WAN, Chunli XIE, Quanrun LYU, Yao LIANG
Journal of Computer Applications    2024, 44 (4): 1259-1268.   DOI: 10.11772/j.issn.1001-9081.2023040485

In the field of software engineering, code clone detection methods based on semantic similarity can reduce the cost of software maintenance and prevent system vulnerabilities. As a typical form of abstract code representation, the Abstract Syntax Tree (AST) has achieved success in code clone detection tasks for many programming languages. However, existing work mainly uses the original AST to extract code semantics and does not exploit the deep semantic and structural information in the AST. To solve this problem, a code clone detection method based on Dependency Enhanced Hierarchical Abstract Syntax Tree (DEHAST) was proposed. Firstly, the AST was layered and divided into different semantic levels. Secondly, corresponding dependency enhancement edges were added to different levels of the AST to construct the DEHAST, transforming a simple AST into a heterogeneous graph with richer program semantics. Finally, a Graph Matching Network (GMN) model was used to detect the similarity of the heterogeneous graphs to achieve code clone detection. Experimental results on the BigCloneBench and Google Code Jam datasets show that DEHAST is able to detect 100% of Type-1 and Type-2 code clones, 99% of Type-3 code clones, and 97% of Type-4 code clones; compared with the tree-based method ASTNN (AST-based Neural Network), the F1 values all increase by 4 percentage points. Therefore, DEHAST can effectively perform semantic code clone detection.
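The hierarchical layering step can be illustrated with Python's own ast module, grouping nodes by depth; this is only a rough analogue, since the paper layers by semantic level and then adds dependency edges:

```python
import ast

def layer_ast(source):
    """Parse source code and group AST node types by their depth in the
    tree, a simple stand-in for DEHAST's hierarchical layering."""
    tree = ast.parse(source)
    levels = {}
    def walk(node, depth=0):
        levels.setdefault(depth, []).append(type(node).__name__)
        for child in ast.iter_child_nodes(node):
            walk(child, depth + 1)
    walk(tree)
    return levels

levels = layer_ast("def f(x):\n    return x + 1")
print(levels[0])  # ['Module']
print(levels[1])  # ['FunctionDef']
```

Once nodes are grouped into levels, extra edges (e.g. data or control dependencies) can be attached per level to build the heterogeneous graph the method feeds to the GMN.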

Survey of code similarity detection technology
Xiangjie SUN, Qiang WEI, Yisen WANG, Jiang DU
Journal of Computer Applications    2024, 44 (4): 1248-1258.   DOI: 10.11772/j.issn.1001-9081.2023040551

Code reuse not only brings convenience to software development, but also introduces security risks, such as accelerating vulnerability propagation and malicious code plagiarism. Code similarity detection technology calculates code similarity by analyzing lexical, syntactic, semantic and other information between code fragments. It is one of the most effective techniques for judging code reuse, and also a program security analysis technology that has developed rapidly in recent years. First, the latest technical progress in code similarity detection was systematically reviewed, and the current code similarity detection technologies were classified: according to whether the target code is open source, they were divided into source code similarity detection and binary code similarity detection, and further subdivided according to programming language and instruction set. Then, the ideas and research results of each technology were summarized, the successful applications of machine learning in the field of code similarity detection were analyzed, and the advantages and disadvantages of existing technologies were discussed. Finally, the development trend of code similarity detection technology was given to provide reference for relevant researchers.

Resource allocation algorithm for low earth orbit satellites oriented to user demand
Fatang CHEN, Miao HUANG, Yufeng JIN
Journal of Computer Applications    2024, 44 (4): 1242-1247.   DOI: 10.11772/j.issn.1001-9081.2023050561

In the Low Earth Orbit (LEO) satellite multi-beam communication scenario, traditional fixed resource allocation algorithms cannot satisfy the different channel capacity requirements of different users. To meet user requirements, an optimization model minimizing the supply-demand difference by jointly considering channel allocation, bandwidth allocation and power allocation was established, and Pattern Division Multiple Access (PDMA) technology was introduced to improve the utilization of channel resources. In view of the non-convex characteristic of the model, the optimal resource allocation strategy learned by the Q-learning algorithm was used to allocate the channel capacity suitable for each user, and a reward threshold was introduced to further improve the algorithm, speeding up convergence and minimizing the difference between supply and demand at convergence. The simulation results show that the convergence speed of the improved algorithm is about 3.33 times that before improvement; the improved algorithm can satisfy larger user demand, about 14% higher than the Q-learning algorithm before improvement, and about 2.14 times that of the traditional fixed allocation algorithm.
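A tabular Q-learning update with a simple reward-threshold gate can be sketched as follows; the gating rule here is illustrative, and the paper's exact threshold mechanism may differ:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9,
             threshold=0.0):
    """One Q-learning step. The threshold gate (illustrative) skips
    below-threshold rewards so that poor allocations are not reinforced,
    which can speed up convergence toward high-reward strategies."""
    if reward < threshold:
        return Q
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q

Q = {0: {"a": 0.0, "b": 0.0}, 1: {"a": 1.0, "b": 0.0}}
Q = q_update(Q, 0, "a", reward=2.0, next_state=1)
print(Q[0]["a"])  # ≈ 1.45, i.e. 0.5 * (2.0 + 0.9 * 1.0 - 0.0)
```

In the resource allocation setting, states would encode current channel/bandwidth/power assignments and the reward would measure how closely supply matches each user's demand.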

Secondary signal detection algorithm for high-speed mobile environments
Huahua WANG, Xu ZHANG, Feng LI
Journal of Computer Applications    2024, 44 (4): 1236-1241.   DOI: 10.11772/j.issn.1001-9081.2023050580

Orthogonal Time Sequency Multiplexing (OTSM) achieves transmission performance similar to Orthogonal Time Frequency Space (OTFS) modulation with lower complexity, providing a promising solution for future high-speed mobile communication systems that require low-complexity transceivers. To address the insufficient efficiency of existing time-domain Gauss-Seidel (GS) iterative equalization, a secondary signal detection algorithm was proposed. First, low-complexity Linear Minimum Mean Square Error (LMMSE) detection was performed in the time domain, and then the Successive Over-Relaxation (SOR) iterative algorithm was used to further eliminate residual symbol interference. To further optimize convergence efficiency and detection performance, the SOR algorithm was linearly optimized to obtain an Improved SOR (ISOR) algorithm. The simulation results show that, compared with the SOR algorithm, the ISOR algorithm improves detection performance and accelerates convergence at the cost of only a small increase in complexity. Compared with the GS iterative algorithm, the ISOR algorithm has a gain of 1.61 dB when using 16-QAM modulation at a bit error rate of 10^(-4).
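The SOR iteration itself is classical; a minimal sketch for a small linear system follows (this is plain SOR, not the ISOR variant or the OTSM equalizer):

```python
import numpy as np

def sor_solve(A, b, omega=1.25, iters=50):
    """Successive Over-Relaxation for A x = b: each sweep updates x_i using
    the latest values, relaxed by omega. omega = 1 recovers Gauss-Seidel;
    1 < omega < 2 over-relaxes and can converge faster."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # diagonally dominant: SOR converges
b = np.array([1.0, 2.0])
x = sor_solve(A, b)
print(np.allclose(A @ x, b, atol=1e-8))  # True
```

In the detection context, A would play the role of the effective channel matrix and the SOR sweeps progressively cancel residual inter-symbol interference left by the LMMSE stage.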

MAC layer scheduling strategy of roadside units based on MEC server priority service
Xin LI, Liyong BAO, Hongwei DING, Zheng GUAN
Journal of Computer Applications    2024, 44 (4): 1227-1235.   DOI: 10.11772/j.issn.1001-9081.2023050556

Aiming at the Multi-access Edge Computing (MEC) server data transmission requirements of high reliability, low latency and large data volume, a Media Access Control (MAC) scheduling strategy based on conflict-free access, a priority architecture and elastic service technology for the vehicle edge computing scenario was proposed. In the proposed strategy, channel access rights were centrally coordinated by the Road Side Unit (RSU) of the Internet of Vehicles (IoV), which prioritized the link transmission quality between the On Board Unit (OBU) and the MEC server, so that Vehicle-to-Network (V2N) service data could be transmitted in a timely manner. At the same time, an elastic service approach was adopted for services between local OBUs to enhance the reliability of emergency message transmission under dense vehicle access. First, a queuing analysis model was constructed for the scheduling strategy. Then, embedded Markov chains were established according to the memoryless property of the system state variables at each moment, and the system was analyzed theoretically by the method of probability generating functions to obtain exact analytical expressions for key indicators such as the average queue length, the average waiting latency of MEC server communication units and OBUs, and the RSU query period. Computer simulation results show that the statistical analysis results are consistent with the theoretical calculations, and the proposed scheduling strategy can improve the stability and flexibility of the IoV under high load conditions.

Improved DV-Hop localization model based on multi-scenario
Han SHEN, Zhongsheng WANG, Zhou ZHOU, Changyuan WANG
Journal of Computer Applications    2024, 44 (4): 1219-1227.   DOI: 10.11772/j.issn.1001-9081.2023040486

Considering the low positioning accuracy of the Distance Vector Hop (DV-Hop) localization model and the strong scene dependence of its optimization strategies, an improved DV-Hop model, Function correction Distance Vector Hop (FuncDV-Hop), based on function analysis and coefficients determined by simulation, was presented. First, the average hop distance, distance estimation, and least square error in the DV-Hop model were analyzed, and the following concepts were introduced: undetermined coefficient optimization, step function segmentation experiments, a weight function approach using equivalent points, and modified maximum likelihood estimation. Then, to design controlled trials, multi-scenario comparison experiments varying the number of nodes, the proportion of beacon nodes, the communication radius, the number of beacon nodes, and the number of unknown nodes were designed using the control variable technique. Finally, the experiment was split into two phases: determining coefficients by simulation and integrated optimization testing. Compared with the original DV-Hop model, the positioning accuracy of the final improved strategy is improved by 23.70%-75.76%, with an average optimization rate of 57.23%. The experimental results show that the optimization rate of the FuncDV-Hop model is up to 50.73%, and compared with DV-Hop models improved by genetic algorithms and neurodynamics, the positioning accuracy of the FuncDV-Hop model is increased by 0.55%-18.77%. The proposed model introduces no additional parameters, does not increase the protocol overhead of the Wireless Sensor Network (WSN), and effectively improves positioning accuracy.
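The classic DV-Hop average-hop-distance correction that the paper refines can be sketched as follows (toy coordinates and hop counts; FuncDV-Hop replaces this plain ratio with fitted correction functions):

```python
import math

def average_hop_distance(beacons, hops):
    """Classic DV-Hop correction for the first beacon: its average hop size
    is the sum of true distances to the other beacons divided by the sum
    of hop counts to them."""
    i = 0  # compute the correction for beacon 0
    dist_sum = sum(math.dist(beacons[i], beacons[j])
                   for j in range(len(beacons)) if j != i)
    hop_sum = sum(hops[i][j] for j in range(len(beacons)) if j != i)
    return dist_sum / hop_sum

beacons = [(0.0, 0.0), (30.0, 0.0), (0.0, 40.0)]
hops = {0: {1: 3, 2: 4}, 1: {0: 3, 2: 5}, 2: {0: 4, 1: 5}}
print(average_hop_distance(beacons, hops))  # (30 + 40) / (3 + 4) = 10.0
```

An unknown node then estimates its distance to each beacon as hop count times this average hop size, which is exactly the step whose systematic error the correction functions target.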

Energy efficiency optimization mechanism for UAV-assisted and non-orthogonal multiple access-enabled data collection system
Rui TANG, Shibo YUE, Ruizhi ZHANG, Chuan LIU, Chuanlin PANG
Journal of Computer Applications    2024, 44 (4): 1209-1218.   DOI: 10.11772/j.issn.1001-9081.2023040482

In the Unmanned Aerial Vehicle (UAV)-assisted and Non-Orthogonal Multiple Access (NOMA)-enabled data collection system, the total energy efficiency of all sensors is maximized by jointly optimizing the three-dimensional placement of the UAVs and the power allocation of the sensors under the ground-air probabilistic channel model and quality-of-service requirements. To solve the original mixed-integer non-convex programming problem, an energy efficiency optimization mechanism was proposed based on convex optimization theory, deep learning theory and the Harris Hawk Optimization (HHO) algorithm. For any given three-dimensional placement of the UAVs, the power allocation sub-problem was first equivalently transformed into a convex optimization problem. Then, based on the optimal power allocation strategy, a Deep Neural Network (DNN) was applied to construct the mapping from the positions of the sensors to the three-dimensional placement of the UAVs, and the HHO algorithm was further utilized to train the model parameters corresponding to the optimal mapping offline. The trained mechanism only involves several algebraic operations and the solution of a single convex optimization problem. Simulation results show that, compared with the traversal search mechanism based on the particle swarm optimization algorithm, the proposed mechanism reduces the average operation time by 5 orders of magnitude while sacrificing only about 4.73% of total energy efficiency in the case of 12 sensors.

Energy-spectrum efficiency trade-off for multi-cognitive relay network with decode-and-forward full-duplex maximum energy harvesting
Zhipeng MAO, Runhe QIU
Journal of Computer Applications    2024, 44 (4): 1202-1208.   DOI: 10.11772/j.issn.1001-9081.2023040534

In a full-duplex multi-cognitive relay network supported by Simultaneous Wireless Information and Power Transfer (SWIPT), in order to maximize energy-spectrum efficiency, the relay with the maximum energy harvesting was selected for decode-and-forward, forming an energy-spectrum efficiency trade-off optimization problem. The problem was transformed into a convex optimization problem by variable transformation and the concave-convex procedure. When the trade-off factor was 0, the optimization problem was equivalent to maximizing the Spectrum Efficiency (SE); when the trade-off factor was 1, it was equivalent to minimizing the energy consumed by the system. To solve this optimization problem, an improved algorithm that directly obtains the trade-off factor maximizing Energy Efficiency (EE) was proposed, which jointly optimized the source node transmit power and the power split factor. The proposed algorithm has two steps. First, the power split factor was fixed, and the source node transmit power and trade-off factor yielding the optimal EE were obtained. Then, the optimal source node transmit power was fixed, and the optimal power split factor was obtained by using the relationship between energy-spectrum efficiency and the power split factor. Simulation results show that the relay network with maximum energy harvesting outperforms networks formed by other relays in both EE and SE. Compared with optimizing only the transmit power, the proposed algorithm increases EE by more than 63% and SE by more than 30%; its EE and SE are almost the same as those of the exhaustive method, while the proposed algorithm converges faster.

Robust resource allocation optimization in cognitive wireless network integrating information communication and over-the-air computation
Hualiang LUO, Quanzhong LI, Qi ZHANG
Journal of Computer Applications    2024, 44 (4): 1195-1202.   DOI: 10.11772/j.issn.1001-9081.2023050573

To address the power resource limitations of wireless sensors in over-the-air computation networks and the spectrum competition with existing wireless information communication networks, a cognitive wireless network integrating information communication and over-the-air computation was studied, in which the primary network focused on wireless information communication, and the secondary network aimed to support over-the-air computation, with the sensors harvesting energy from signals sent by the base station of the primary network. Considering the constraints on the Mean Square Error (MSE) of over-the-air computation and the transmit power of each node in the network, and based on random channel uncertainty, a robust resource optimization problem was formulated with the objective of maximizing the sum rate of wireless information communication users. To solve the robust optimization problem effectively, an Alternating Optimization (AO)-Improved Constrained Stochastic Successive Convex Approximation (ICSSCA) algorithm, called AO-ICSSCA, was proposed, by which the original robust optimization problem was transformed into deterministic optimization sub-problems, and the downlink beamforming vector of the base station in the primary network, the power factors of the sensors, and the fusion beamforming vector of the fusion center in the secondary network were alternately optimized. Simulation results demonstrate that the AO-ICSSCA algorithm achieves superior performance with less computing time than the Constrained Stochastic Successive Convex Approximation (CSSCA) algorithm before improvement.

Hybrid NSGA-Ⅱ for vehicle routing problem with multi-trip pickup and delivery
Jianqiang LI, Zhou HE
Journal of Computer Applications    2024, 44 (4): 1187-1194.   DOI: 10.11772/j.issn.1001-9081.2023101512

Concerning the trade-off between convergence and diversity in solving the multi-trip pickup and delivery Vehicle Routing Problem (VRP), a hybrid Non-dominated Sorting Genetic Algorithm Ⅱ (NSGA-Ⅱ) combining the Adaptive Large Neighborhood Search (ALNS) algorithm and Adaptive Neighborhood Selection (ANS), called NSGA-Ⅱ-ALNS-ANS, was proposed. Firstly, considering the influence of the initial population on the convergence speed of the algorithm, an improved regret insertion method was employed to obtain a high-quality initial population. Secondly, to improve the global and local search capabilities of the algorithm, various destroy-repair operators and neighborhood structures were designed according to the characteristics of the pickup and delivery problem. Finally, a Best Fit Decreasing (BFD) algorithm based on random sampling and an efficient feasible-solution evaluation criterion were proposed to generate vehicle routing schemes. Simulation experiments were conducted on public benchmark instances of different scales; in comparison with MA (Memetic Algorithm), the optimal solution quality of the proposed algorithm increased by 27%. The experimental results show that the proposed algorithm can rapidly generate high-quality vehicle routing schemes that satisfy multiple constraints, and outperforms existing algorithms in terms of both convergence and diversity.
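Plain Best Fit Decreasing, which the paper augments with random sampling, can be sketched as follows:

```python
def best_fit_decreasing(items, capacity):
    """Best Fit Decreasing bin packing: sort demands in descending order,
    then place each item into the open bin with the least remaining (but
    still sufficient) space, opening a new bin only when nothing fits.
    Returns the number of bins (vehicles/trips) used."""
    bins = []  # remaining capacity of each open bin
    for item in sorted(items, reverse=True):
        candidates = [i for i, r in enumerate(bins) if r >= item]
        if candidates:
            best = min(candidates, key=lambda i: bins[i])  # tightest fit
            bins[best] -= item
        else:
            bins.append(capacity - item)
    return len(bins)

print(best_fit_decreasing([5, 4, 3, 2, 2], capacity=8))  # 2
```

In the routing context, "bins" correspond to vehicle trips bounded by capacity, and randomizing the item order (as the paper's sampling variant does) diversifies the generated routing schemes.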

Potential barrier estimation criterion based on quantum dynamics framework of optimization algorithm
Yaqin CHEN, Peng WANG
Journal of Computer Applications    2024, 44 (4): 1180-1186.   DOI: 10.11772/j.issn.1001-9081.2023040553

Quantum Dynamics Framework (QDF) is a basic iterative process of optimization algorithms with representative and universal significance, obtained under the quantum dynamics model of optimization algorithms. Differential acceptance is an important mechanism for avoiding local optima and solving the premature convergence problem of optimization algorithms. In order to introduce the differential acceptance mechanism into the QDF, based on the quantum dynamics model, the differential solution was regarded as a potential barrier encountered during particle motion, and the probability of particles penetrating the barrier was calculated using the transmission coefficient of the quantum tunneling effect. Thus, the differential acceptance criterion of the quantum dynamics model was obtained: the Potential Barrier Estimation Criterion (PBEC). PBEC is related to the height and width of the potential barrier and the mass of the particles. Compared with the classical Metropolis acceptance criterion, PBEC can more comprehensively estimate the behavior of the optimization algorithm when it encounters a differential solution during sampling. The experimental results show that the QDF algorithm based on PBEC has a stronger ability to jump out of local optima and higher search efficiency than the QDF algorithm based on the Metropolis acceptance criterion, and that PBEC is a feasible and effective differential acceptance mechanism for quantum optimization algorithms.
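The contrast between the Metropolis rule and a tunneling-based barrier estimate can be sketched as follows; the tunneling formula is the textbook rectangular-barrier approximation, and the constants are illustrative rather than the paper's calibration:

```python
import math

def metropolis_accept(delta_e, temperature):
    """Classic Metropolis rule: accept a worse solution with probability
    e^(-dE/T); depends only on the energy difference."""
    return min(1.0, math.exp(-delta_e / temperature))

def barrier_accept(height, width, mass, hbar=1.0):
    """PBEC-style estimate via the quantum tunneling transmission
    coefficient of a rectangular barrier, T ~ e^(-2 w sqrt(2 m V) / hbar).
    Unlike Metropolis, it also depends on barrier width and particle mass."""
    return math.exp(-2.0 * width * math.sqrt(2.0 * mass * height) / hbar)

# A wider barrier is harder to tunnel through at the same height:
print(barrier_accept(1.0, 0.5, 1.0) > barrier_accept(1.0, 1.0, 1.0))  # True
```

The extra dependence on width and mass is exactly what lets PBEC estimate the differential solution more comprehensively than the energy-only Metropolis rule.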

Table and Figures | Reference | Related Articles | Metrics
DFS-Cache: memory-efficient and persistent client cache for distributed file systems
Ruixuan NI, Miao CAI, Baoliu YE
Journal of Computer Applications    2024, 44 (4): 1172-1180.   DOI: 10.11772/j.issn.1001-9081.2023050590
Abstract115)   HTML0)    PDF (3096KB)(98)       Save

To effectively reduce cache defragmentation overhead and improve the cache hit ratio in data-intensive workflows, a persistent client cache for distributed file systems, DFS-Cache (Distributed File System Cache), was designed and implemented. Based on Non-Volatile Memory (NVM), it ensures data persistence and crash consistency while significantly reducing cold-start time. DFS-Cache consists of a cache defragmentation mechanism based on virtual memory remapping and a cache space management strategy based on Time-To-Live (TTL). The former exploits the fact that NVM can be directly addressed by the memory controller: by dynamically modifying the mapping between virtual and physical addresses, zero-copy memory defragmentation is achieved. The latter is a cold-hot separated grouping management strategy that enhances the efficiency of cache space management with the support of the remapping-based defragmentation mechanism. Experiments were conducted on real Intel Optane persistent memory devices with standard benchmarks such as Fio and Filebench. Compared with the commercial distributed file systems MooseFS and GlusterFS, the proposed client cache increases system throughput by up to 5.73 times and 1.89 times, respectively.
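The TTL-based cold/hot grouping can be sketched roughly as below; the class and its threshold rule are hypothetical illustrations, and the real DFS-Cache operates on NVM cache regions rather than Python dictionaries:

```python
import time

class TTLCacheGroups:
    """Sketch of cold/hot separated grouping: entries whose remaining TTL
    falls below a fraction of the full TTL are grouped as "cold", so a
    remapping-based defragmenter could reclaim their regions together
    while leaving "hot" regions untouched."""

    def __init__(self, ttl_seconds, cold_fraction=0.5):
        self.ttl = ttl_seconds
        self.cold_fraction = cold_fraction
        self.entries = {}  # key -> insertion timestamp

    def put(self, key, now=None):
        self.entries[key] = time.time() if now is None else now

    def classify(self, now=None):
        """Split cached keys into (hot, cold) by remaining TTL."""
        now = time.time() if now is None else now
        hot, cold = [], []
        for key, ts in self.entries.items():
            remaining = self.ttl - (now - ts)
            (cold if remaining < self.cold_fraction * self.ttl else hot).append(key)
        return hot, cold
```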

Table and Figures | Reference | Related Articles | Metrics
Security analysis of PFP algorithm under quantum computing model
Yanjun LI, Xiaoyu JING, Huiqin XIE, Yong XIANG
Journal of Computer Applications    2024, 44 (4): 1166-1171.   DOI: 10.11772/j.issn.1001-9081.2023050576
Abstract74)   HTML0)    PDF (1376KB)(43)       Save

The rapid development of quantum technology and the continuous improvement of quantum computing efficiency, especially the emergence of the Shor and Grover algorithms, pose a great threat to the security of traditional public-key and symmetric ciphers. The PFP block cipher, designed on a Feistel structure, was analyzed. First, the linear transformation P of the round function was fused into the periodic functions of the Feistel structure, yielding four 5-round periodic functions of PFP, two rounds more than the periodic functions of a general Feistel structure, which was verified through experiments. Furthermore, using the quantum Grover and Simon algorithms with a 5-round periodic function as the distinguisher, the security of 9- and 10-round PFP was evaluated by analyzing the characteristics of the PFP key schedule. The time complexities required for key recovery are 2^26 and 2^38.5, the quantum resources required are 193 and 212 qubits, and 58 and 77 key bits can be recovered, respectively, which is superior to existing impossible differential analysis results.

Table and Figures | Reference | Related Articles | Metrics
Domain transfer intrusion detection method for unknown attacks on industrial control systems
Haoran WANG, Dan YU, Yuli YANG, Yao MA, Yongle CHEN
Journal of Computer Applications    2024, 44 (4): 1158-1165.   DOI: 10.11772/j.issn.1001-9081.2023050566
Abstract120)   HTML0)    PDF (2452KB)(88)       Save

Aiming at the lack of Industrial Control System (ICS) data and the poor detection of unknown attacks by industrial control intrusion detection systems, an unknown-attack intrusion detection method for industrial control systems based on a Generative Adversarial Transfer Learning network (GATL) was proposed. Firstly, causal inference and cross-domain feature mapping relations were introduced to reconstruct the data and improve its understandability and reliability. Secondly, because of the data imbalance between the source and target domains, a domain confusion-based conditional Generative Adversarial Network (GAN) was used to increase the size and diversity of the target-domain dataset. Finally, the differences and commonalities of the data were fused through domain-adversarial transfer learning to improve the detection and generalization capabilities of the intrusion detection model for unknown attacks in the target domain. The experimental results show that, on a standard industrial control network dataset, GATL achieves an average F1-score of 81.59% in detecting unknown attacks in the target domain while maintaining a high detection rate for known attacks, which is 63.21 and 64.04 percentage points higher than the average F1-scores of the Dynamic Adversarial Adaptation Network (DAAN) and the Information-enhanced Adversarial Domain Adaptation (IADA) method, respectively.

Table and Figures | Reference | Related Articles | Metrics
Data classified and graded access control model based on master-slave multi-chain
Meihong CHEN, Lingyun YUAN, Tong XIA
Journal of Computer Applications    2024, 44 (4): 1148-1157.   DOI: 10.11772/j.issn.1001-9081.2023040529
Abstract82)   HTML0)    PDF (3335KB)(60)       Save

To solve the slow accurate retrieval caused by mixed data storage and the difficult security governance caused by unclassified and ungraded data management, a data classified-and-graded access control model based on a master-slave multi-chain architecture was built to achieve classified and graded protection of data and dynamic secure access. Firstly, a hybrid on-chain and off-chain trusted storage model was constructed to alleviate the storage bottleneck faced by blockchains. Secondly, a master-slave multi-chain architecture was proposed, and smart contracts were designed to automatically store data of different privacy levels in the slave chains. Finally, based on Role-Based Access Control, a Multi-Chain and Level Policy-Role Based Access Control (MCLP-RBAC) mechanism was constructed, and its specific access control process was given. Under the graded access control policy, the throughput of the proposed model stabilizes at around 360 TPS (Transactions Per Second); compared with the BC-BLPM scheme it has superior throughput, with the ratio of sending rate to throughput reaching 1∶1. Compared with having no access strategy, memory consumption is reduced by about 35.29%; compared with a traditional single-chain structure, average memory consumption is reduced by 52.03%; and compared with a scheme storing all data on-chain, average storage space is reduced by 36.32%. The experimental results show that the proposed model can effectively reduce the storage burden, achieves graded secure access, is suitable for the management of multi-class data, and has high scalability.
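The level-dominance check at the heart of such a graded policy can be sketched as follows; the names, fields, and rule are illustrative and not the paper's contract interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    """Hypothetical role: a clearance level plus the slave chains
    (each holding data of one privacy grade) the role may query."""
    name: str
    level: int            # clearance level; higher = more privileged
    chains: frozenset     # identifiers of accessible slave chains

def can_access(role, chain_id, data_grade):
    """Grant access iff the role is bound to the chain and its clearance
    level dominates (is at least) the data's privacy grade."""
    return chain_id in role.chains and role.level >= data_grade
```

In the model described above this check would run inside a smart contract on the master chain, with the graded data itself living on the slave chains.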

Table and Figures | Reference | Related Articles | Metrics
Blockchain consensus improvement algorithm based on BDLS
Lipeng ZHAO, Bing GUO
Journal of Computer Applications    2024, 44 (4): 1139-1147.   DOI: 10.11772/j.issn.1001-9081.2023050581
Abstract82)   HTML0)    PDF (4688KB)(105)       Save

To solve the low consensus efficiency of the Blockchain version of DLS (BDLS) consensus algorithm in systems with many hierarchically organized nodes, a blockchain consensus improvement algorithm based on BDLS, HBDLS (Hierarchical Blockchain version of DLS), was proposed. Firstly, nodes were divided into two levels according to their attributes in practical applications, with each high-level node managing one low-level node cluster. Then, consensus was carried out within each cluster of low-level nodes, and the results were reported to the corresponding high-level nodes. Finally, the high-level nodes reached consensus again over all the reported cluster results, and data that passed the high-level consensus was written into the blockchain. Theoretical analysis and simulation results show that with 36 nodes and 4 500 transactions per block, the throughput of HBDLS is about 21% higher than that of BDLS; with 44 nodes and 3 000 transactions per block, the throughput of HBDLS is about 52% higher; and with 44 nodes and a single transaction per block, the consensus latency of HBDLS is about 26% lower. Experimental results show that HBDLS can significantly improve consensus efficiency in systems with many nodes and a large transaction volume.
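The two-level flow can be caricatured with plain majority voting standing in for the BFT-style BDLS rounds (a deliberate simplification; the real protocol tolerates Byzantine nodes, which simple majorities do not):

```python
from collections import Counter

def majority(votes):
    """Return the value with a strict majority of votes, or None."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count * 2 > len(votes) else None

def hierarchical_consensus(clusters):
    """Hypothetical two-level sketch of HBDLS: each low-level cluster
    first agrees internally, its high-level node reports the result, and
    the high-level nodes then run a second round over the reports.

    clusters maps a high-level node id to the votes of its cluster."""
    reports = [majority(votes) for votes in clusters.values()]
    reports = [r for r in reports if r is not None]
    return majority(reports)
```

The efficiency gain comes from each round involving only a fraction of the nodes, which is why throughput improves as node count grows.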

Table and Figures | Reference | Related Articles | Metrics
Fuzzy clustering algorithm based on belief subcluster cutting
Yu DING, Hanlin ZHANG, Rong LUO, Hua MENG
Journal of Computer Applications    2024, 44 (4): 1128-1138.   DOI: 10.11772/j.issn.1001-9081.2023050610
Abstract39)   HTML3)    PDF (4644KB)(20)       Save

The Belief Peaks Clustering (BPC) algorithm is a new variant of the Density Peaks Clustering (DPC) algorithm from a fuzzy perspective, using fuzzy mathematics to describe the distribution characteristics and correlation of data. However, when computing belief values, BPC relies mainly on the information of local data points rather than the distribution and structure of the whole dataset, and the robustness of its original allocation strategy is weak. To solve these problems, a fuzzy Clustering algorithm based on Belief Subcluster Cutting (BSCC) was proposed by combining belief peaks with a spectral method. Firstly, the dataset was divided into many high-purity subclusters using local belief information. Then, each subcluster was treated as a new sample, and the spectral method performed graph-cut clustering based on the similarity between subclusters, thus coupling local and global information. Finally, the points in each subcluster were assigned to the cluster of their subcluster to complete the clustering. Compared with BPC, BSCC has obvious advantages on datasets containing multiple subclusters, with ACCuracy (ACC) improvements of 16.38 and 21.35 percentage points on the americanflag and Car datasets, respectively. Clustering experiments on synthetic and real datasets show that BSCC outperforms BPC and seven other clustering algorithms on the three evaluation indicators of Adjusted Rand Index (ARI), Normalized Mutual Information (NMI) and ACC.
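The final assignment step is a simple label lift, which can be sketched directly (the spectral cut over subclusters is assumed to have already produced `subcluster_to_cluster`):

```python
def lift_labels(point_to_subcluster, subcluster_to_cluster):
    """Sketch of BSCC's last step: once the spectral cut has labeled each
    high-purity subcluster, every point simply inherits the cluster label
    of the subcluster it belongs to."""
    return {p: subcluster_to_cluster[s]
            for p, s in point_to_subcluster.items()}
```

Because the subclusters are built to be high-purity, this lift preserves the quality of the subcluster-level cut at the point level.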

Table and Figures | Reference | Related Articles | Metrics
Recommendation method based on knowledge‑awareness and cross-level contrastive learning
Jie GUO, Jiayu LIN, Zuhong LIANG, Xiaobo LUO, Haitao SUN
Journal of Computer Applications    2024, 44 (4): 1121-1127.   DOI: 10.11772/j.issn.1001-9081.2023050613
Abstract86)   HTML0)    PDF (968KB)(51)       Save

As a kind of side information, a Knowledge Graph (KG) can effectively improve the recommendation quality of recommendation models, but existing knowledge-aware recommendation methods based on Graph Neural Networks (GNN) suffer from unbalanced utilization of node information. To address this problem, a new recommendation method based on Knowledge-awareness and Cross-level Contrastive Learning (KCCL) was proposed. To alleviate the unbalanced use of node information caused by sparse interaction data and a noisy knowledge graph, which distort the true inter-node dependencies during information aggregation, a contrastive learning paradigm was introduced into the GNN-based knowledge-aware recommendation model. Firstly, the user-item interaction graph and the item knowledge graph were integrated into a heterogeneous graph, and the node representations of users and items were obtained by a GNN based on the graph attention mechanism. Secondly, consistent noise was added to the information propagation and aggregation layers for data augmentation, obtaining node representations at different levels, and the outermost node representation was contrasted with the innermost node representation for cross-level contrastive learning. Finally, the supervised recommendation task and the contrastive learning auxiliary task were jointly optimized to obtain the final representation of each node. Experimental results on the DBbook2014 and MovieLens-1m datasets show that, compared to the second-best contrastive method, the Recall@10 of KCCL is improved by 3.66% and 0.66% and the NDCG@10 by 3.57% and 3.29%, respectively, verifying the effectiveness of KCCL.
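The cross-level contrast can be illustrated with a plain InfoNCE-style loss between outer-layer and inner-layer node representations; this is a generic sketch, and the paper's exact loss and similarity function may differ:

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.2):
    """Generic InfoNCE sketch: pull a node's outermost-layer vector
    (anchor) toward its own innermost-layer vector (positive) and push
    it away from the innermost vectors of other nodes (negatives).
    Vectors are plain lists; cosine similarity is used, as is typical."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return dot / (nu * nv)

    pos = math.exp(cos(anchor, positive) / temperature)
    neg = sum(math.exp(cos(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

A matched anchor/positive pair yields a small loss; a mismatched pair with a similar negative yields a large one, which is the signal that regularizes the aggregation layers.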

Table and Figures | Reference | Related Articles | Metrics
Bird recognition algorithm based on attention mechanism
Tianhua CHEN, Jiaxuan ZHU, Jie YIN
Journal of Computer Applications    2024, 44 (4): 1114-1120.   DOI: 10.11772/j.issn.1001-9081.2023081042
Abstract111)   HTML2)    PDF (2874KB)(101)       Save

Aiming at the low accuracy of existing algorithms on fine-grained bird recognition tasks, an object detection algorithm for birds, called YOLOv5-Bird, was proposed. Firstly, a mixed-domain Coordinate Attention (CA) mechanism was introduced into the backbone of YOLOv5 to increase the weights of valuable channels and distinguish target features from redundant background features. Secondly, Bi-level Routing Attention (BRA) modules were used to replace some of the C3 modules in the original backbone, filtering out weakly correlated key-value pairs and obtaining efficient long-distance dependencies. Finally, the WIoU (Wise-Intersection over Union) function was used as the loss function to enhance the localization ability of the algorithm. Experimental results show that the detection precision of YOLOv5-Bird reaches 82.8% and the recall reaches 77.0% on a self-constructed dataset, which are 4.3 and 7.6 percentage points higher than those of the YOLOv5 algorithm. Compared with algorithms using other attention mechanisms, YOLOv5-Bird also has performance advantages. It is verified that YOLOv5-Bird performs better in bird detection scenarios.

Table and Figures | Reference | Related Articles | Metrics
Image classification algorithm based on overall topological structure of point cloud
Jie WANG, Hua MENG
Journal of Computer Applications    2024, 44 (4): 1107-1113.   DOI: 10.11772/j.issn.1001-9081.2023050563
Abstract75)   HTML4)    PDF (2456KB)(77)       Save

Convolutional Neural Networks (CNN) are sensitive to the local features of data because of their complex classification boundaries and large number of parameters; as a result, the accuracy of a CNN model decreases significantly under adversarial attacks. Topological Data Analysis (TDA) methods, by contrast, focus on the macroscopic features of data and are therefore naturally resistant to noise and gradient-based attacks. Therefore, an image classification algorithm named MCN (Mapper-Combined neural Network), combining topological data analysis with CNN, was proposed. Firstly, the Mapper algorithm was used to obtain a Mapper graph describing the macroscopic features of the dataset, and each sample point was given a new feature, represented as a binary vector, from a multi-view Mapper graph. Then, the hidden-layer feature was enhanced by combining this new feature with the hidden-layer feature extracted by the CNN. Finally, the feature-enhanced sample data was used to train a fully connected classification network to complete the image classification task. Comparing MCN with a pure convolutional network and a single-Mapper-feature classification algorithm on the MNIST and FashionMNIST datasets, the initial classification accuracy of MCN with PCA (Principal Component Analysis) dimensionality reduction is improved by 4.65% and 8.05%, and with LDA (Linear Discriminant Analysis) dimensionality reduction by 8.21% and 5.70%. Experimental results show that MCN has higher classification accuracy and stronger resistance to attacks.
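The binary Mapper feature can be sketched as below, assuming each Mapper node is represented by the set of sample indices it covers (the Mapper construction itself is omitted):

```python
def mapper_binary_features(num_points, mapper_nodes):
    """Sketch of the new feature in MCN: a point's feature is the binary
    vector marking which Mapper nodes contain it.

    mapper_nodes is a list of sets of point indices, one set per node of
    the Mapper graph (a hypothetical representation)."""
    return [
        [1 if i in node else 0 for node in mapper_nodes]
        for i in range(num_points)
    ]
```

These vectors encode where each point sits in the dataset's global topology, which is the information concatenated with the CNN's hidden-layer features.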

Table and Figures | Reference | Related Articles | Metrics
Re-weighted adversarial variational autoencoder and its application in industrial causal effect estimation
Zongyu LI, Siwei QIANG, Xiaobo GUO, Zhenfeng ZHU
Journal of Computer Applications    2024, 44 (4): 1099-1106.   DOI: 10.11772/j.issn.1001-9081.2023050557
Abstract208)   HTML1)    PDF (2192KB)(30)       Save

Counterfactual prediction and selection bias are major challenges in causal effect estimation. To effectively represent the complex mixed distribution of potential covariates and enhance the generalization ability of counterfactual prediction, a Re-weighted adversarial Variational AutoEncoder Network (RVAENet) model was proposed for industrial causal effect estimation. To address the bias problem in the mixed distribution, the idea of domain adaptation was adopted, and an adversarial learning mechanism was used to balance the distribution of the latent-variable representations learned by the Variational AutoEncoder (VAE). Furthermore, sample propensity weights were learned to re-weight the samples, reducing the distribution difference between the treatment group and the control group. The experimental results show that, in two scenarios on industrial real-world datasets, the Area Under the Uplift Curve (AUUC) of the proposed model is improved by 15.02% and 16.02% compared to TEDVAE (Treatment Effect with Disentangled VAE). On public datasets, the proposed model generally achieves the best results for Average Treatment Effect (ATE) and Precision in Estimation of Heterogeneous Effect (PEHE).
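The re-weighting step can be illustrated with standard inverse-propensity weights, a common choice in causal inference; the paper learns its weights jointly with the model, so this is only an analogy:

```python
def propensity_weights(treatments, propensities):
    """Sketch of inverse-propensity re-weighting: treated samples get
    weight 1/e(x) and control samples 1/(1-e(x)), where e(x) is the
    estimated probability of treatment, so the re-weighted treatment and
    control groups mimic the same covariate distribution."""
    return [
        1.0 / p if t == 1 else 1.0 / (1.0 - p)
        for t, p in zip(treatments, propensities)
    ]
```

Intuitively, a treated sample that was unlikely to be treated (small e(x)) is up-weighted because it represents many similar but untreated samples.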

Table and Figures | Reference | Related Articles | Metrics
Location control method for generated objects by diffusion model with exciting and pooling attention
Jinsong XU, Ming ZHU, Zhiqiang LI, Shijie GUO
Journal of Computer Applications    2024, 44 (4): 1093-1098.   DOI: 10.11772/j.issn.1001-9081.2023050634
Abstract102)   HTML3)    PDF (2886KB)(45)       Save

Due to the ambiguity of text and the lack of location information in training data, current state-of-the-art diffusion models cannot accurately control the locations of generated objects in an image under text-prompt conditioning. To address this issue, a spatial condition specifying the object's location range was introduced, and an attention-guided method, based on the strong correlation between the cross-attention maps in U-Net and the spatial layout of the image, was proposed to control the generation of the attention map and thereby the locations of generated objects. Specifically, based on the Stable Diffusion (SD) model, in the early stage of cross-attention map generation in the U-Net layers, a loss was introduced to stimulate high attention values within the given location range and to reduce the average attention value outside it. The noise vector in the latent space was optimized step by step in each denoising step to control the generation of the attention map. Experimental results show that the proposed method can effectively control the locations of one or more objects in the generated image, and, when generating multiple objects, reduces object omission, redundant object generation, and object fusion.
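The attention-guided loss can be sketched on a toy attention map; the paper's exact loss may weight the inside and outside terms differently, so this is our illustration of the idea only:

```python
def location_loss(attn_map, box):
    """Sketch of an attention location loss: encourage high cross-attention
    inside the target box and penalize attention mass outside it.

    attn_map is a 2-D list of non-negative attention values; box is
    (r0, r1, c0, c1) with half-open row/column ranges.
    Loss = (1 - mean inside) + mean outside, so it is minimized when all
    attention mass sits inside the box."""
    r0, r1, c0, c1 = box
    inside, outside = [], []
    for r, row in enumerate(attn_map):
        for c, v in enumerate(row):
            (inside if r0 <= r < r1 and c0 <= c < c1 else outside).append(v)
    mean_in = sum(inside) / len(inside) if inside else 0.0
    mean_out = sum(outside) / len(outside) if outside else 0.0
    return (1.0 - mean_in) + mean_out
```

In the method described above, the gradient of such a loss with respect to the latent noise vector is what steers each denoising step.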

Table and Figures | Reference | Related Articles | Metrics
Point cloud semantic segmentation based on attention mechanism and global feature optimization
Pengfei ZHANG, Litao HAN, Hengjian FENG, Hongmei LI
Journal of Computer Applications    2024, 44 (4): 1086-1092.   DOI: 10.11772/j.issn.1001-9081.2023050588
Abstract124)   HTML2)    PDF (1971KB)(81)       Save

In 3D point cloud semantic segmentation based on deep learning, to enhance the fine-grained extraction of local features and to learn long-range dependencies between different local neighborhoods, a neural network based on an attention mechanism and global feature optimization was proposed. First, a Single-Channel Attention (SCA) module and a Point Attention (PA) module were designed in the form of additive attention: the former strengthens the resolution of local features by adaptively adjusting the features of each point within a single channel, and the latter adjusts the importance of each single-point feature vector to suppress useless features and reduce feature redundancy. Second, a Global Feature Aggregation (GFA) module was added to aggregate local neighborhood features and capture global context information, thereby improving semantic segmentation accuracy. The experimental results show that the proposed network improves the mean Intersection-over-Union (mIoU) by 1.8 percentage points compared with RandLA-Net (Random sampling and an effective Local feature Aggregator Network) on the point cloud dataset S3DIS, and has good segmentation performance and adaptability.
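The point-attention idea can be caricatured in plain Python; scoring by L2 norm is our stand-in for the learned additive-attention scorer, so this sketch only conveys the reweighting mechanics:

```python
import math

def point_attention(features):
    """Sketch of Point Attention: score each point's feature vector, turn
    the scores into softmax weights, and rescale the vectors so that
    informative points are amplified and redundant ones suppressed.

    Scoring here is simply the vector's L2 norm; the paper learns this
    scoring with a small additive-attention network."""
    scores = [math.sqrt(sum(x * x for x in f)) for f in features]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [[w * x for x in f] for w, f in zip(weights, features)]
```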

Table and Figures | Reference | Related Articles | Metrics